
Editing

SKETCH supports multiple techniques for editing geometry. Some mimic paper-and-pencil editing by recognizing editing gestures composed of strokes (e.g., oversketching and drawing shadows). Others use gestures that contain an interactor to transform shapes as a whole by translation or rotation.

Resizing. A common way to ``resize'' a surface with pencil and paper is to sketch back and forth over its bounding lines until they are of the right size. SKETCH recognizes a similar ``oversketching'' gesture to reshape objects. If two approximately coincident lines are drawn in opposite directions nearly parallel to an existing edge, SKETCH infers a resizing operation (Figure 4). This sketching operation works for all primitives constructed of straight line segments, including cubes, cylinders and extrusions. Additionally, the two endpoints of an extrusion path can be attached to two objects in the scene; whenever either object moves, the extrusion will resize to maintain the span. However, general reshaping of objects defined by freehand curves is more difficult and not yet fully implemented. We are currently adapting Baudel's mark-based interaction paradigm [3] for use in reshaping 3D objects.
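The recognition test for this gesture reduces to a few vector comparisons. The following is a minimal sketch, not SKETCH's actual code: the function name and thresholds are hypothetical, and each stroke is assumed to have already been fit to a 2D screen-space segment given as a pair of endpoints.

    import numpy as np

    def is_oversketch_resize(s1, s2, edge, cos_tol=0.97, dist_px=8.0):
        """Test whether strokes s1 and s2 (each a pair of 2D endpoints)
        form a resize gesture over an existing edge: approximately
        coincident, nearly parallel to the edge, in opposite directions."""
        def unit(seg):
            d = np.asarray(seg[1], float) - np.asarray(seg[0], float)
            return d / np.linalg.norm(d)
        def mid(seg):
            return (np.asarray(seg[0], float) + np.asarray(seg[1], float)) / 2.0
        d1, d2, de = unit(s1), unit(s2), unit(edge)
        opposite = np.dot(d1, d2) < -cos_tol        # strokes anti-parallel
        parallel = abs(np.dot(d1, de)) > cos_tol    # both parallel to the edge
        coincident = np.linalg.norm(mid(s1) - mid(s2)) < dist_px
        return opposite and parallel and coincident

If the test passes, the oversketched edge can be resized to the extent of the new strokes, as in Figure 4.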

  
Figure 4: Resizing an object is done by ``oversketching'' one edge with two parallel strokes in opposite directions.

Sketching shadows. Shadows are an important cue for determining the depth of an object in a scene [29]. In SKETCH, we exploit this relationship by allowing users to edit an object's position by drawing its shadow. The gesture for this is first to stroke a dot over an object, and then to stroke its approximate shadow -- a set of impressionistic line strokes -- on another surface using the Shift modifier key.(5) The dot indicates which object is being shadowed, and the displacement of the shadow from the object determines the new position for the object (as if there were a directional light source directed opposite to the normal of the surface on which the shadow is drawn). The resulting shadow is also ``interactive'' and can be manipulated as described by Herndon et al. [13].
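The geometry of this mapping is straightforward. Below is a minimal sketch under the stated light model; the names are hypothetical, all arguments are assumed to be 3D NumPy vectors, and preserving the object's height above the surface is an assumption rather than a documented detail.

    import numpy as np

    def reposition_from_shadow(obj_pos, shadow_centroid, surface_pt, surface_n):
        """Move the object so that a directional light aimed opposite the
        surface normal would cast its shadow at the sketched strokes'
        centroid; the object's height above the surface is preserved."""
        n = surface_n / np.linalg.norm(surface_n)
        height = np.dot(obj_pos - surface_pt, n)  # distance above the surface
        return shadow_centroid + height * n       # same height, over the shadow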

Transforming. Objects can be transformed as a unit with a ``click-and-drag'' interactor (using the second mouse button): the click determines the object to manipulate, and the drag determines the amount and direction of the manipulation. By default, an object follows the cursor while constrained to translate along the locally planar surface on which it was created. However, this motion can be further constrained, or converted into a rotation.
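Under this default, each 2D cursor sample can be mapped back onto the creation surface by intersecting the cursor's eye ray with that surface's plane. A minimal sketch of the projection (hypothetical names; SKETCH's actual code may differ):

    import numpy as np

    def project_cursor_to_plane(eye, ray_dir, plane_pt, plane_n):
        """Intersect the cursor's world-space eye ray with the plane the
        object was created on; the hit point is the object's new position."""
        denom = np.dot(ray_dir, plane_n)
        if abs(denom) < 1e-9:
            return None                  # ray parallel to plane: skip sample
        t = np.dot(plane_pt - eye, plane_n) / denom
        return eye + t * ray_dir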

It is important to keep in mind that our interaction constraints are all very simple. Instead of using a constraint solver capable of handling a wide variety of constraints, SKETCH associates an interaction handler with each geometric object. This handler stores the constraint information: which plane or axis the object is constrained to translate along, or which axis it is constrained to rotate about. When the user manipulates an object, all mouse-motion data is channeled through that object's handler, which converts the 2D mouse data into constrained 3D transformations. The user explicitly specifies the active constraint with a gesture; once specified, a constraint persists until a new one is drawn for that object, with each new constraint overwriting the previous one.
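A handler of this kind needs only to remember the active constraint and filter cursor displacements through it. The following is a hypothetical sketch of such a handler, assuming world-space cursor displacements have already been recovered; it is not the actual SKETCH source.

    import numpy as np

    class InteractionHandler:
        """Per-object handler holding the single active constraint.
        Specifying a new constraint overwrites the old one, and the
        current constraint persists across subsequent drags."""
        def __init__(self, plane_n):
            self.kind = "plane"          # default: creation-surface plane
            self.plane_n = plane_n / np.linalg.norm(plane_n)
            self.axis = None

        def set_axis(self, axis):        # e.g., from a stroked constraint axis
            self.kind = "axis"
            self.axis = axis / np.linalg.norm(axis)

        def constrain(self, delta):
            """Filter an unconstrained 3D displacement through the
            active constraint."""
            if self.kind == "axis":      # 1D: keep only the axis component
                return np.dot(delta, self.axis) * self.axis
            # 2D: remove the component along the plane normal
            return delta - np.dot(delta, self.plane_n) * self.plane_n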

The advantages of such a simple constraint system are that it is robust, fast, and easy to understand. A more sophisticated constraint engine would allow a greater variety of constrained manipulation, but it would also require that the user be aware of which constraints were active and how each constraint worked. It would also require additional gestures so that the user could specify these other constraints.

Systems such as Kurlander and Feiner's [14] attempt to infer constraints from multiple drawings, but this approach has the drawback that the user must construct multiple valid configurations of the system just to define a constraint. Such approaches may also infer constraints the user never intended, or may be too limited to infer the constraints the user wants.

Constrained transformation. The gestures for constraining object transformations to single-axis translation or rotation, or to plane-aligned translation, are composed of a series of strokes that define the constraint, followed by a ``click-and-drag'' interactor to perform the actual manipulation (Figure 5). To constrain the motion of an object to a single axis, the user first strokes an axis-aligned constraint axis, then translates the object with an interactor by clicking and dragging parallel to the constraint axis. The constraint axis is stroked just as if a new piece of geometry were being constructed; however, since the stroke is followed by an interactor, a translation gesture is recognized and no geometry is created. Similarly, if the user drags perpendicular to the constraint axis instead of parallel to it, the gesture is interpreted as a single-axis rotation. (This gesture roughly corresponds to the motion one would use in the real world to rotate an object about an axis.)

  
Figure 5: Sketching constraints on objects: 1D translation, 1D rotation, and 2D translation.
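The parallel-versus-perpendicular decision can be made by comparing the drag direction against the screen-space projection of the constraint axis. A sketch of this test follows; the 45-degree threshold is an assumption, not SKETCH's documented value.

    import numpy as np

    def classify_drag(drag_dir, axis_dir, cos_45=0.707):
        """Choose the operation from the 2D drag direction and the
        screen-space projection of the constraint axis (unit vectors):
        dragging roughly parallel to the axis translates along it,
        roughly perpendicular rotates about it."""
        return "translate" if abs(np.dot(drag_dir, axis_dir)) > cos_45 else "rotate"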

To translate in one of the three axis-aligned planes, two perpendicular lines must be stroked on an object. The directions of these two lines determine the plane in which the object is constrained to translate. If the two perpendicular lines are drawn over a different object from the one being manipulated, they are interpreted as a contact constraint (although non-intuitive, this gesture is effective in practice). This forces the manipulated object to move so that it always remains in contact with some surface in the scene (though not necessarily the object over which the gesture was drawn) while tracking the cursor.

Finally, a dot stroke drawn on an object before using an interactor is interpreted as the viewing vector; the object will be constrained to translate along this vector. This constraint is particularly useful for fine-tuning the placement of an object when SKETCH has placed it at the ``wrong'' depth; however, since we use an orthographic view that does not automatically generate shadows, feedback for this motion is limited to seeing the object penetrate other objects in the scene. We believe that a rendering mode in which shadows were automatically generated for all objects would be beneficial, although we have not implemented one because of the expected computational overhead. We did, however, mock up a rendering mode in which just the manipulated object cast a shadow on the rest of the scene. People in our group generally found the shadow helpful, but were slightly disturbed that none of the other objects cast shadows.
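For the dot-stroke constraint, the motion itself is simple. A sketch follows, assuming vertical cursor motion is mapped to depth with a hypothetical gain factor:

    import numpy as np

    def translate_along_view(obj_pos, view_dir, cursor_dy, gain=0.01):
        """Dot-stroke constraint: translate the object along the viewing
        vector. Under the orthographic view the object's screen position
        is unchanged, so the only visible feedback is interpenetration."""
        v = view_dir / np.linalg.norm(view_dir)
        return obj_pos + gain * cursor_dy * v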

In each case, the manipulation constraint, once established, is maintained during subsequent interactions until a new constraint is drawn for that object. The only exception is that single axis rotation and single axis translation constraints can both be active at the same time; depending on how the user gestures -- either mostly parallel to the translation axis or mostly perpendicular to the rotation axis -- a translation or rotation operation, respectively, is chosen.

Finally, objects are removed from the scene by clicking on them with an interactor gesture. Early versions of SKETCH used an apparently more natural gesture to remove objects: the user ``tossed'' them away by translating them with a quick throwing motion, as one might brush crumbs from a table. We found, however, that this gesture presented a complication: it was too easy to toss out the wrong object, especially if its screen size was small.
